WIP: OCPNODE-2462 - Split Filesystem OCP Enhancement #1657
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: (none). The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Force-pushed from eb11659 to 957070f (Compare)
> - https://github.com/containers/storage/pull/1885
>
> - How does one delete all images and containers once the container runtime config is changed?
> - crictl on all images and containers on each node?
This could be an out-of-scope item. There is an argument that externally managed storage is owned by the customer.
My main reason for listing this out is what happens if you change from no image store to a new location for the image store. Kubelet GC of the existing images would not clean that up, and tbh crictl wouldn't either. I guess we would require manual deletion if they ran out of disk space.
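A per-node cleanup along the lines discussed here could be sketched as below. This is only an illustration, not part of the enhancement; it assumes a crictl recent enough to support the `--all` flags, and that it is acceptable to wipe every container and image on the node:

```shell
# Hypothetical per-node cleanup after changing the container runtime
# storage config. Destructive by design; run on each affected node.
crictl rmp --all --force   # remove all pod sandboxes (and their containers)
crictl rm --all --force    # remove any remaining containers
crictl rmi --all           # remove all images from the old store
```

As noted above, this likely only works while the runtime can still see the old store; once the store location has moved, the orphaned content would not be visible to crictl.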
Technically, I think we should reserve...

> The split-filesystem.sh script would:

The Node Team's scope starts at the IMAGE_STORE partition label... This partition label could be set up by CAPI, by documentation, or by some other external entity.
Force-pushed from 5ce0e41 to 1a2a504 (Compare)
```yaml
# storage.bu
variant: openshift
version: 4.14.0
```
Does this work with 4.17.0? If so, we should use the newest version IMO.
Interestingly enough, I only got this to work with 4.14.
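For context, a fuller storage.bu along the lines this thread discusses might look like the sketch below. The partition label (IMAGE_STORE), mount point, and MachineConfig name are assumptions for illustration, not values from the enhancement; the variant/version pair is the one reported to work above:

```yaml
# storage.bu -- illustrative sketch only; label, path, and name are assumed
variant: openshift
version: 4.14.0
metadata:
  name: 98-worker-image-store          # hypothetical MachineConfig name
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  filesystems:
    - device: /dev/disk/by-partlabel/IMAGE_STORE  # assumed externally created label
      format: xfs
      path: /var/lib/containers/storage           # assumed mount point
      with_mount_unit: true
```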
> Since the image cache has changed locations, all the old images left over should be removed.
>
> Simplest option is to remove the images on each node where this feature was enabled.
We should spell out what to do with the containers (which would be left over after the reboot), especially because a lot of those containers will require a running network pod to remove.
I have this sorta as an open question. Do we manually delete all of /var/lib/containers/storage on enablement of this feature, or on disable? I think using a tool like crictl may actually not work if we change the container runtime. For podman, changing these locations does orphan those containers/images. I think a manual rm may be the best thing we can do... unless we delete before enablement of the feature.
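The "manual rm" idea could look roughly like the sketch below on each affected node. This is a guess at the shape of the procedure, not a documented step; the path and service names assume a standard OpenShift node, and it assumes stopping the runtime first is acceptable:

```shell
# Hypothetical manual cleanup of the old store on one node.
systemctl stop kubelet.service crio.service   # stop the runtime so nothing holds the store open
rm -rf /var/lib/containers/storage/*          # wipe the orphaned containers/images
systemctl start crio.service kubelet.service  # restart; the new store location takes over
```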
Why do we want the old images removed?
Force-pushed from 1a2a504 to 31d1b59 (Compare)
Force-pushed from 31d1b59 to 3252e2d (Compare)
> Why do we want the old images removed?

We are going to take this enhancement to a meeting to discuss further design.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so with /close.

/lifecycle stale
/remove-lifecycle stale
@kannon92: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Inactive enhancement proposals go stale after 28d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle stale. If this proposal is safe to close now please do so with /close.

/lifecycle stale
Stale enhancement proposals rot after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Mark the proposal as fresh by commenting /remove-lifecycle rotten. If this proposal is safe to close now please do so with /close.

/lifecycle rotten
Rotten enhancement proposals close after 7d of inactivity. See https://github.com/openshift/enhancements#life-cycle for details. Reopen the proposal by commenting /reopen.

/close
@openshift-bot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This is an OCP enhancement for enabling KEP-4191 in OpenShift.